Pain is a personal, subjective experience that is commonly evaluated through visual analog scales (VAS). While this is often convenient and useful, automatic pain detection systems can reduce the effort of pain score acquisition in large-scale studies by estimating it directly from participants' facial expressions. In this paper, we propose a novel two-stage learning approach for VAS estimation: first, our algorithm employs Recurrent Neural Networks (RNNs) to automatically estimate Prkachin and Solomon Pain Intensity (PSPI) levels from face images. The estimated scores are then fed into personalized Hidden Conditional Random Fields (HCRFs), which estimate the VAS reported by each person. Personalization of the model is performed using a newly introduced facial expressiveness score, unique to each person. To the best of our knowledge, this is the first approach to automatically estimate VAS from face images. We demonstrate the benefits of the proposed personalized approach over the traditional non-personalized approach on a benchmark dataset for pain analysis from face images.